The missing rung in the FOSS assurance ladder is cost sharing

I spent a couple of evenings researching the b4mad.industries proposal for a Standard and Criteria Catalog for Fair, Transparent, and Sustainable Cost Sharing in FOSS Components and Dependencies. Going in, I assumed the hard part would be choosing between competing definitions of “fair.” Coming out, I’m convinced the more interesting finding is something else entirely: the FOSS assurance stack has a missing rung, and nobody is standing on it.

What I Found

We already have a tall stack of FOSS standards. SPDX/REUSE tell you what is in the software. ISO/IEC 5230 and 18974 (OpenChain) tell you whether the consuming organisation has a credible compliance and security program. CHAOSS measures project health. OpenSSF Criticality Score and Census II rank how important a dependency is. Tidelift, Open Collective, GitHub Sponsors, Sovereign Tech Fund, NLnet, Drips, Gitcoin Quadratic Funding, Optimism RetroPGF, ecosyste.ms Funds, and Open Source Pledge all move money. Across all of these, not one defines a fair share, a disclosure schema, or a binding between dependency identifiers and funding flows [Source 1, 6, 7]. CHAOSS comes closest: its Funding Working Group published a 2025 Practitioner Guide, but the guide explicitly states that “no single universal framework exists” and recommends customised approaches per funder.
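To make the missing rung concrete, here is a minimal sketch of what a funding-disclosure record binding a dependency identifier to a funding flow might look like, using a package URL (purl) as the identifier. Every field name here is invented for illustration; no existing standard defines this schema, which is precisely the gap the proposal targets.

```python
from dataclasses import dataclass, asdict

@dataclass
class FundingDisclosure:
    """Hypothetical record binding a dependency identifier to a funding flow.
    No standard in the current stack (SPDX, OpenChain, CHAOSS, OpenSSF)
    defines these fields; they are a sketch of what the missing rung needs."""
    purl: str          # package URL identifying the dependency (purl spec)
    channel: str       # funding channel, e.g. "open-collective", "github-sponsors"
    amount_eur: float  # disclosed annual contribution
    period: str        # period the disclosure covers, e.g. an ISO 8601 year

disclosure = FundingDisclosure(
    purl="pkg:npm/left-pad@1.3.0",
    channel="open-collective",
    amount_eur=1200.0,
    period="2025",
)
print(asdict(disclosure))
```

A machine-readable record like this is what would let an SBOM tool answer “which of our dependencies did we actually fund last year?”, the binding none of the listed initiatives currently provides.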

Security Is the Bottleneck: A Position Paper on Security-First Agent Architecture

As AI agent capabilities scale rapidly, the limiting factor for broad adoption is no longer model intelligence — it is security. Lex Fridman crystallized this in his widely shared analysis: “security will become THE bottleneck for effectiveness and usefulness of AI agents.” This paper argues that the agent security problem, not model quality, is the primary differentiator in the emerging agent ecosystem. We present the access–risk–usefulness triangle as a framework for reasoning about agent deployment, analyze why the current “YOLO mode” of agent usage cannot scale, and describe #B4mad’s architecture as a concrete, working implementation of security-first agent design.
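One illustrative way to reason about the access–risk–usefulness triangle is a toy scoring model. The functional forms below are invented for illustration, not taken from the paper; the point is only the shape of the trade-off, that usefulness and risk both rise with access, so “YOLO mode” (full access) maximizes both.

```python
def triangle(access: float) -> dict:
    """Toy (invented) model of the access-risk-usefulness trade-off:
    the more an agent can touch, the more it can do and the more it can
    damage. 'access' is normalized to [0, 1]."""
    assert 0.0 <= access <= 1.0
    usefulness = access       # capability grows with access (assumption)
    risk = access ** 2        # blast radius grows faster (assumption)
    return {"access": access, "usefulness": usefulness, "risk": risk}

# "YOLO mode": grant everything, accept maximal risk.
print(triangle(1.0))
# A security-first deployment picks a point on the curve deliberately.
print(triangle(0.5))
```

The design question the paper poses is where on this curve a deployment should sit, and how architecture (sandboxing, capability scoping) bends the risk term downward for a given level of access.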

Building Agent Discovery: Technical Patterns from Registry to Agent2Agent Communication

Abstract

The vision of million-agent networks is compelling, but how do you actually build the discovery infrastructure to make it real? This article bridges the gap between theory and implementation, exploring practical patterns emerging from registry experiments, the Model Context Protocol (MCP) revolution, and production deployments.

We’ll examine four concrete approaches: DNS-based discovery, registry APIs, well-known URLs, and dynamic tool discovery through MCP. You’ll see how MCP acts as the “USB-C port for AI applications,” enabling runtime capability enumeration without hardcoded integrations. We’ll also tackle critical production challenges: the multiple context problem that fragments agent memory, security patterns for enterprise deployment, and the architectural decisions that determine whether your agent network scales or stalls at 1,000 agents.
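Of the four approaches, the well-known-URL pattern is the simplest to sketch: an agent publishes a small “agent card” document at a conventional path on its domain, and callers fetch and parse it before deciding to connect. The path and field names below follow the Agent2Agent agent-card shape, but treat them as illustrative rather than normative; the parsing is deliberately simplified.

```python
import json
from urllib.parse import urlunparse

def agent_card_url(domain: str) -> str:
    """Well-known URL for an agent card on a given domain (A2A-style
    convention; treat the exact path as illustrative)."""
    return urlunparse(("https", domain, "/.well-known/agent.json", "", "", ""))

def parse_agent_card(raw: str) -> dict:
    """Pull out the fields a caller needs to decide whether to talk to this
    agent: who it is, where it lives, what it can do."""
    card = json.loads(raw)
    return {
        "name": card["name"],
        "url": card["url"],
        "skills": [s["id"] for s in card.get("skills", [])],
    }

# A card a hypothetical parliamentary-minutes agent might publish:
raw = json.dumps({
    "name": "minutes-agent",
    "url": "https://agents.example.org/minutes",
    "skills": [{"id": "summarize-meeting"}, {"id": "extract-votes"}],
})
print(agent_card_url("agents.example.org"))
print(parse_agent_card(raw))
```

Registry APIs and MCP dynamic discovery layer on top of the same idea: instead of one card per domain, a registry aggregates many cards, and MCP lets a client enumerate capabilities at runtime rather than at parse time.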

The Million-Agent Vision: Why Discovery is the Critical Infrastructure Gap

A million AI agents collaborating, discovering each other, composing capabilities. We’re nowhere near. Today’s agents are integrated by hand, one at a time, on whatever protocol the vendor picked that week.

This is a discovery problem. We solved it for websites with DNS, for microservices with service meshes. For agents, we haven’t. Whoever does owns the next layer: an Agent Registration System, an Agent Naming Service, an Agent Gateway.

The interesting threshold sits at 10,000 agents. Below it, networks behave linearly and you can muscle through with manual integration. Above it, they self-organise. GPT Store crossed that line in January 2024 and growth went exponential — same platform, different physics.
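The scaling pressure behind that threshold is easy to quantify. Without shared discovery, every agent pair needs a bespoke integration, so the work grows quadratically; with a registry, each agent integrates once, against the registry. The threshold figure is the article’s claim; the arithmetic below is just the standard pairwise count.

```python
def pairwise_integrations(n: int) -> int:
    """Bespoke point-to-point integrations for n agents: n choose 2."""
    return n * (n - 1) // 2

def registry_integrations(n: int) -> int:
    """With a shared registry, each agent integrates exactly once."""
    return n

for n in (100, 1_000, 10_000):
    print(f"{n:>6} agents: {pairwise_integrations(n):>12} pairwise vs "
          f"{registry_integrations(n):>6} via registry")
```

At 10,000 agents the pairwise count is roughly 50 million integrations, which is why “muscle through with manual integration” stops being an option well before a network can self-organise.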

Agent-first API design for parliamentary meeting data

Modern APIs designed for agent consumption require fundamentally different priorities from traditional human-developer interfaces. For a GraphQL API serving parliamentary meeting data, the transformation from human-first to agent-first design demands semantic precision, structural consistency, and machine-interpretable documentation, while supporting agent types ranging from LLMs to web scrapers.

Core principles differentiate agent and human design

Agent-first API design prioritizes machine interpretability over developer convenience. Where human-focused APIs tolerate ambiguity through context and documentation, agent-first interfaces demand unambiguous semantic meaning in every field, consistent patterns across all endpoints, and self-describing capabilities through structured metadata. The shift represents moving from flexible, multi-path approaches that humans navigate intuitively to single, deterministic paths that machines can reliably traverse.
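A minimal sketch of what “unambiguous semantic meaning in every field” looks like for a parliamentary meeting record: closed vocabularies instead of free-text status strings, and units and timezones encoded in both the field name and the value. The field names and the `Meeting` shape are invented for illustration, not taken from any real parliament’s schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum

class MeetingStatus(Enum):
    """Closed vocabulary instead of a free-text 'status' string: an agent
    can branch on every possible value without guessing."""
    SCHEDULED = "SCHEDULED"
    IN_SESSION = "IN_SESSION"
    ADJOURNED = "ADJOURNED"
    CANCELLED = "CANCELLED"

@dataclass
class Meeting:
    """Agent-first shape for a parliamentary meeting record (illustrative
    field names, not a real schema)."""
    id: str                  # globally unique, stable identifier
    status: MeetingStatus    # enum, never prose like "probably moved"
    starts_at_utc: datetime  # timezone explicit in the name AND the value
    agenda_item_count: int   # counts as integers, the unit in the name

m = Meeting(
    id="meeting/2025/plenary-042",
    status=MeetingStatus.ADJOURNED,
    starts_at_utc=datetime(2025, 3, 14, 9, 0, tzinfo=timezone.utc),
    agenda_item_count=17,
)
print(m.status.value, m.starts_at_utc.isoformat())
```

The same discipline maps directly onto a GraphQL schema: enum types for status, non-null scalars with units in the name, and descriptions on every field so the schema itself is the machine-interpretable documentation.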

The Unwritten Rules of Sustainable Open Source: A Comprehensive Guide

Open source projects that survive decades share a secret: they prioritize human connections over code quality, build trust through transparent governance, and treat disagreements as opportunities for innovation rather than threats to cohesion. This comprehensive research reveals the patterns that distinguish thriving communities from those destined to burn out, drawing from academic studies, maintainer experiences, and lessons from projects that have endured since the early days of the internet.

Beyond the Code: The Human Infrastructure of Successful Projects

The Apache Software Foundation’s enduring principle “Community Over Code” represents more than philosophy—it’s a survival strategy backed by decades of evidence. Analysis from the Linux Foundation reveals that 23 of the 30 highest-velocity open source projects are backed by either foundations or corporations, providing what researchers call the “janitor functions” necessary for large-scale project management: triaging bugs, answering user questions, handling legal issues, and maintaining long-term stability.